A Continuous Exact ℓ0 Penalty (CEL0) for Least Squares Regularized Problem
Authors
Abstract
Lemma 4.4 in [E. Soubies, L. Blanc-Féraud and G. Aubert, SIAM J. Imaging Sci., 8 (2015), pp. 1607–1639] is wrong for local minimizers of the continuous exact ℓ0 (CEL0) functional. The argument used to conclude the proof of this lemma is not sufficient in the case of local minimizers. In this note, we supply a revision of this lemma in which new results are established for local minimizers. Theorem 4.8 in that paper remains unchanged, but its proof has to be rewritten according to the new version of the lemma. Finally, some remarks of that paper are also rewritten using the corrected lemma.
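For convenience, the CEL0 functional at issue can be recalled in its standard form; this is a sketch restated here (with A a matrix with nonzero columns a_i, data d, and regularization parameter λ > 0), and the cited 2015 paper remains the authoritative reference:

\[
G_{\mathrm{CEL0}}(x) \;=\; \frac{1}{2}\,\lVert Ax - d\rVert_2^{2}
\;+\; \sum_{i=1}^{N} \phi\bigl(\lVert a_i\rVert, \lambda; x_i\bigr),
\qquad
\phi(a,\lambda;t) \;=\; \lambda \;-\; \frac{a^{2}}{2}\Bigl(\lvert t\rvert - \frac{\sqrt{2\lambda}}{a}\Bigr)^{2}
\mathbb{1}_{\{\lvert t\rvert \le \sqrt{2\lambda}/a\}}.
\]

The correction described above concerns how local minimizers of this functional relate to those of the underlying ℓ0-penalized least squares criterion.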
Similar Articles
Using an Efficient Penalty Method for Solving Linear Least Square Problem with Nonlinear Constraints
In this paper, we use a penalty method for solving the linear least squares problem with nonlinear constraints. In each iteration of penalty methods for solving the problem, the calculation of the projected Hessian matrix is required. Given that the objective function is linear least squares, the projected Hessian matrix of the penalty function consists of two parts, such that the exact amount of a part of i...
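The excerpt above does not give the penalty explicitly; purely as a generic illustration (the symbols A, b, c and ρ are ours, not taken from that paper), a quadratic-penalty reformulation of a linear least squares problem with nonlinear equality constraints reads

\[
\min_{x}\; \tfrac{1}{2}\lVert Ax - b\rVert_2^{2} \quad \text{s.t.} \quad c(x) = 0
\qquad\longrightarrow\qquad
\min_{x}\; \tfrac{1}{2}\lVert Ax - b\rVert_2^{2} + \tfrac{\rho}{2}\,\lVert c(x)\rVert_2^{2},
\]

whose unconstrained subproblems are solved for an increasing sequence of penalty parameters ρ.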
Group Sparse Recovery via the ℓ0(ℓ2) Penalty: Theory and Algorithm
In this work we propose and analyze a novel approach for recovering group sparse signals, which arise naturally in a number of practical applications. It is based on regularized least squares with an ℓ0(ℓ2) penalty. One distinct feature of the new approach is that it has the built-in decorrelation mechanism within each group, and thus can handle the challenging strong inner-group correlation. We ...
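The penalty notation in the excerpt above was garbled in extraction; based on the title, it refers to an ℓ0 penalty applied to group ℓ2 norms. As a generic formulation (the symbols Ψ, y, λ and the group index set 𝒢 are ours), such a group sparse recovery problem can be written as

\[
\min_{x}\; \tfrac{1}{2}\lVert \Psi x - y\rVert_2^{2}
\;+\; \lambda \sum_{g \in \mathcal{G}} \mathbb{1}_{\{\lVert x_g\rVert_2 \neq 0\}},
\]

i.e., the regularizer counts the number of active groups rather than individual nonzero entries.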
Superlinearly convergent exact penalty projected structured Hessian updating schemes for constrained nonlinear least squares: asymptotic analysis
We present a structured algorithm for solving constrained nonlinear least squares problems, and establish its local two-step Q-superlinear convergence. The approach is based on an adaptive structured scheme, due to Mahdavi-Amiri and Bartels, of the exact penalty method of Coleman and Conn for nonlinearly constrained optimization problems. The structured adaptation also makes use of the ideas of N...
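As a generic reminder of what an exact (nondifferentiable) penalty looks like in this setting, and not a claim about the precise Coleman–Conn construction, a constrained nonlinear least squares problem (minimize (1/2)‖F(x)‖² subject to c(x) = 0) can be handled via

\[
P_{\mu}(x) \;=\; \tfrac{1}{2}\lVert F(x)\rVert_2^{2} \;+\; \mu \sum_{i} \lvert c_i(x)\rvert,
\]

which, under standard assumptions, is exact for all sufficiently large finite μ, in the sense that local solutions of the constrained problem are local minimizers of P_μ.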
The Trimmed Lasso: Sparsity and Robustness
Nonconvex penalty methods for sparse modeling in linear regression have been a topic of fervent interest in recent years. Herein, we study a family of nonconvex penalty functions that we call the trimmed Lasso and that offers exact control over the desired level of sparsity of estimators. We analyze its structural properties and in doing so show the following: 1. Drawing parallels between robus...
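For reference, the trimmed Lasso penalty mentioned above is, to our understanding, the sum of the smallest absolute entries of the coefficient vector: with |β|_(1) ≥ … ≥ |β|_(p) denoting the entries of β sorted by magnitude and k the target sparsity level,

\[
T_{k}(\beta) \;=\; \sum_{i=k+1}^{p} \lvert\beta\rvert_{(i)},
\]

so that T_k(β) = 0 exactly when β has at most k nonzero entries, which is the sense in which the penalty offers exact control over the level of sparsity.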
Quelles relaxations continues pour le critère ℓ2 − ℓ0 ?
For more than two decades, several continuous (and generally separable) penalties approximating (relaxing) the ℓ0 pseudo-norm have been proposed. Although some “good” properties for such penalties have been highlighted, the choice of one relaxation rather than another one remains unclear. One approach to compare them is to investigate their fidelity to the initial problem. In other words, do th...
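Concretely, comparing such relaxations amounts to putting the ℓ2 − ℓ0 criterion next to its separable surrogates (notation here is generic, not taken from the excerpt):

\[
\min_{x}\; \tfrac{1}{2}\lVert Ax - d\rVert_2^{2} + \lambda\lVert x\rVert_0
\qquad\text{versus}\qquad
\min_{x}\; \tfrac{1}{2}\lVert Ax - d\rVert_2^{2} + \sum_{i}\varphi(x_i),
\]

and asking whether the relaxed criterion preserves the global (and ideally local) minimizers of the original ℓ0 one.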
Journal title: SIAM J. Imaging Sciences
Volume: 8, Issue: -
Pages: -
Publication year: 2015